leaf area
Semantic segmentation of sparse irregular point clouds for leaf/wood discrimination
Lidar (Light Detection and Ranging) has become an essential part of the remote sensing toolbox used for biosphere monitoring. In particular, Lidar makes it possible to map forest leaf area with unprecedented accuracy, while leaf area remains an important source of uncertainty in models of gas exchange between vegetation and the atmosphere. Unmanned Aerial Vehicles (UAVs) are easy to mobilize and therefore allow frequent revisits to track the response of vegetation to climate change. However, the miniature sensors carried by UAVs usually provide point clouds of limited density, which are further affected by a strong decrease in density from the top to the bottom of the canopy due to progressively stronger occlusion. In this context, discriminating leaf points from wood points is particularly challenging because of the strong class imbalance and the spatially irregular sampling intensity.
- Asia > Singapore (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
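The class imbalance described above is commonly mitigated with inverse-frequency class weights in the segmentation loss, so that the rare wood class is not drowned out by the abundant leaf class. The following is a minimal, hedged sketch of that general idea (not the paper's actual method; the function name and toy labels are illustrative):

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class weights inversely proportional to class frequency.

    Rare classes (e.g. wood points) receive larger weights so that a
    weighted loss does not collapse onto the majority (leaf) class.
    A perfectly balanced class gets weight 1.0.
    """
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

# Toy point-cloud labels: 0 = leaf (majority), 1 = wood (minority)
labels = np.array([0] * 90 + [1] * 10)
w = inverse_frequency_weights(labels, 2)  # wood weighted ~9x more than leaf
```

Such weights are typically passed to the cross-entropy term of whatever segmentation network is used, leaving the sampling of the point cloud untouched.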
PhenoAssistant: A Conversational Multi-Agent AI System for Automated Plant Phenotyping
Chen, Feng, Stogiannidis, Ilias, Wood, Andrew, Bueno, Danilo, Williams, Dominic, Macfarlane, Fraser, Grieve, Bruce, Wells, Darren, Atkinson, Jonathan A., Hawkesford, Malcolm J., Rolfe, Stephen A., Lawson, Tracy, Pridmore, Tony, Giuffrida, Mario Valerio, Tsaftaris, Sotirios A.
Plant phenotyping increasingly relies on (semi-)automated image-based analysis workflows to improve its accuracy and scalability. However, many existing solutions remain overly complex, difficult to reimplement and maintain, and pose high barriers for users without substantial computational expertise. To address these challenges, we introduce PhenoAssistant: a pioneering AI-driven system that streamlines plant phenotyping via intuitive natural language interaction. PhenoAssistant leverages a large language model to orchestrate a curated toolkit supporting tasks including automated phenotype extraction, data visualisation, and automated model training. We validate PhenoAssistant through several representative case studies and a set of evaluation tasks. By significantly lowering technical hurdles, PhenoAssistant underscores the promise of AI-driven methodologies for democratising AI adoption in plant biology.
- Europe > United Kingdom > Scotland (0.04)
- Europe > United Kingdom > England > Nottinghamshire > Nottingham (0.04)
- North America > Mexico > Gulf of Mexico (0.04)
- Europe > United Kingdom > England > Leicestershire > Loughborough (0.04)
- Research Report (1.00)
- Workflow (0.68)
Using 3D reconstruction from image motion to predict total leaf area in dwarf tomato plants
Usenko, Dmitrii, Helman, David, Giladi, Chen
Accurate estimation of total leaf area (TLA) is essential for assessing plant growth, photosynthetic activity, and transpiration, but it remains a challenge for bushy plants like dwarf tomatoes. Traditional destructive methods and imaging-based techniques often fall short due to labor intensity, plant damage, or the inability to capture complex canopies. This study evaluated a non-destructive method combining sequential 3D reconstructions from RGB images and machine learning to estimate TLA for three dwarf tomato cultivars--Mohamed, Hahms Gelbe Topftomate, and Red Robin--grown under controlled greenhouse conditions. Two experiments, conducted in spring-summer and autumn-winter, included 73 plants, yielding 418 TLA measurements using an "onion" approach, in which layers of leaves were sequentially removed and scanned. High-resolution videos were recorded from multiple angles for each plant, and 500 frames were extracted per plant for 3D reconstruction. Point clouds were created and processed, four reconstruction algorithms (Alpha Shape, Marching Cubes, Poisson, and Ball Pivoting) were tested, and meshes were evaluated using seven regression models: Multivariable Linear Regression (MLR), Lasso Regression (Lasso), Ridge Regression (Ridge-Reg), Elastic Net Regression (ENR), Random Forest (RF), extreme gradient boosting (XGBoost), and Multilayer Perceptron (MLP). The Alpha Shape reconstruction (α = 3) combined with XGBoost yielded the best performance, achieving an R² of 0.80 and an MAE of 489 cm². These findings demonstrate the robustness of our approach across variable environmental conditions and canopy structures. This scalable, automated TLA estimation method is particularly suited for urban farming and precision agriculture, offering practical implications for automated pruning, improved resource efficiency, and sustainable food production. Keywords: Total leaf area, dwarf tomato, point cloud, mesh reconstruction, machine learning, precision agriculture
1. Introduction
Total leaf area (TLA) is a comprehensive metric describing a plant's growth and functioning. It is a primary metric describing the plant's photosynthetic activity and transpiration capacity. Normalized by the plant's surface area, TLA may provide information on the canopy structure, which is crucial for understanding the plant's energy and resource efficiency. For example, reduced TLA is a sign of stress (Dong et al., 2019), while excessive biomass, indicated by a higher TLA, signifies lower water use efficiency (Glenn et al., 2006). Farmers often prune commercial crops to reduce TLA and increase crop productivity (Budiarto et al., 2023). However, measuring TLA and finding a crop's optimum TLA are challenging tasks.
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Asia > Middle East > Israel > Southern District > Ashdod (0.04)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.46)
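Whatever reconstruction algorithm produces the mesh, the area-related features feeding the regressors ultimately rest on triangle-mesh surface area, which is half the norm of the cross product of two edge vectors, summed over triangles. A self-contained sketch of just that step (the reconstruction and XGBoost stages are omitted; function and variable names are illustrative, not from the paper):

```python
import numpy as np

def mesh_surface_area(vertices, triangles):
    """Total surface area of a triangle mesh.

    vertices:  (N, 3) array of 3D points
    triangles: (M, 3) array of vertex indices per triangle
    Each triangle contributes half the norm of the cross product
    of two of its edge vectors.
    """
    v = np.asarray(vertices, dtype=float)
    t = np.asarray(triangles, dtype=int)
    e1 = v[t[:, 1]] - v[t[:, 0]]
    e2 = v[t[:, 2]] - v[t[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

# Unit square in the XY plane split into two triangles -> area 1.0
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
tris = [[0, 1, 2], [0, 2, 3]]
area = mesh_surface_area(verts, tris)  # 1.0
```

In a full pipeline, libraries such as Open3D can produce the alpha-shape mesh directly from a point cloud, after which an area computed this way becomes one input feature to the regression model.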
Deep Learning-Based Direct Leaf Area Estimation using Two RGBD Datasets for Model Development
Jayasuriya, Namal, Guo, Yi, Hu, Wen, Ghannoum, Oula
Estimation of a single leaf's area can serve as a measure of crop growth and as a phenotypic trait for breeding new varieties. It has also been used to measure leaf area index and total leaf area. Some studies have used hand-held cameras, image processing, 3D reconstruction, and unsupervised learning-based methods to estimate leaf area in plant images. Deep learning works well for object detection and segmentation tasks; however, direct area estimation of objects has not been explored. This work investigates deep learning-based leaf area estimation for RGBD images taken with a mobile camera setup in real-world scenarios. A dataset of attached leaves captured with a top-angle view and a dataset of detached single leaves were collected for model development and testing. First, image processing-based area estimation was tested on manually segmented leaves. Then a Mask R-CNN-based model was investigated and modified to accept RGBD images and to estimate leaf area. The detached-leaf dataset was then mixed with the attached-leaf plant dataset to estimate single leaf area in plant images, and another network design with two backbones was proposed: one for segmentation and the other for area estimation. Instead of trying all possibilities or random values, an agile approach was used in hyperparameter tuning. The final model was cross-validated with 5 folds and tested with two unseen datasets: detached and attached leaves. The F1 score at 90% IoA for segmentation on the unseen detached-leaf data was 1.0, and the R-squared of area estimation was 0.81. For unseen plant data, the F1 score at 90% IoA was 0.59, and the R-squared score was 0.57. The research suggests using attached leaves with ground-truth area to improve the results.
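An F1 score at an overlap threshold is computed by matching predicted masks to ground-truth masks one-to-one and counting a match as a true positive when the overlap score clears the threshold. A minimal sketch of that evaluation, assuming "IoA" denotes intersection over ground-truth area (one plausible reading; the paper may define it differently, and the greedy matching here is an illustrative simplification):

```python
import numpy as np

def f1_at_threshold(pred_masks, gt_masks, thr=0.9):
    """Greedy one-to-one matching of predicted and ground-truth binary
    masks. A pair is a true positive when the overlap score
    (here: intersection / ground-truth area) reaches the threshold."""
    matched_gt = set()
    tp = 0
    for p in pred_masks:
        for i, g in enumerate(gt_masks):
            if i in matched_gt:
                continue
            inter = np.logical_and(p, g).sum()
            if g.sum() > 0 and inter / g.sum() >= thr:
                matched_gt.add(i)
                tp += 1
                break
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# A perfect prediction of a single 2x2 leaf mask in a 4x4 image
gt = np.zeros((4, 4), dtype=bool)
gt[:2, :2] = True
score = f1_at_threshold([gt.copy()], [gt])  # 1.0
```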
Field Robot for High-throughput and High-resolution 3D Plant Phenotyping
Esser, Felix, Rosu, Radu Alexandru, Cornelißen, André, Klingbeil, Lasse, Kuhlmann, Heiner, Behnke, Sven
With the need to feed a growing world population, the efficiency of crop production is of paramount importance. To support breeding and field management, various characteristics of the plant phenotype need to be measured -- a time-consuming process when performed manually. We present a robotic platform equipped with multiple laser and camera sensors for high-throughput, high-resolution in-field plant scanning. We create digital twins of the plants through 3D reconstruction. This allows the estimation of phenotypic traits such as leaf area, leaf angle, and plant height. We validate our system on a real field, where we reconstruct accurate point clouds and meshes of sugar beet, soybean, and maize.
- Europe > Germany (0.04)
- North America > United States > North Dakota > Williams County (0.04)
- North America > United States > Gulf of Mexico > Central GOM (0.04)
- Africa > Middle East > Libya > Murzuq District (0.04)
Batch Bayesian Optimization for Replicable Experimental Design
Dai, Zhongxiang, Nguyen, Quoc Phong, Tay, Sebastian Shenghong, Urano, Daisuke, Leong, Richalynn, Low, Bryan Kian Hsiang, Jaillet, Patrick
Many real-world experimental design problems (a) evaluate multiple experimental conditions in parallel and (b) replicate each condition multiple times due to large and heteroscedastic observation noise. Given a fixed total budget, this naturally induces a trade-off between evaluating more unique conditions while replicating each of them fewer times vs. evaluating fewer unique conditions and replicating each more times. Moreover, in these problems, practitioners may be risk-averse and hence prefer an input with both good average performance and small variability. To tackle both challenges, we propose the Batch Thompson Sampling for Replicable Experimental Design (BTS-RED) framework, which encompasses three algorithms. Our BTS-RED-Known and BTS-RED-Unknown algorithms, for, respectively, known and unknown noise variance, choose the number of replications adaptively rather than deterministically such that an input with a larger noise variance is replicated more times. As a result, despite the noise heteroscedasticity, both algorithms enjoy a theoretical guarantee and are asymptotically no-regret. Our Mean-Var-BTS-RED algorithm aims at risk-averse optimization and is also asymptotically no-regret. We also show the effectiveness of our algorithms in two practical real-world applications: precision agriculture and AutoML.
- Asia > Singapore (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
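The adaptive-replication idea behind BTS-RED can be illustrated outside the full framework: replicate each condition enough times that the variance of its sample mean falls below a common target, so noisier conditions automatically get more replications. A hedged toy sketch of that allocation rule only (not the authors' algorithm, which couples this with Thompson sampling; names and numbers are illustrative):

```python
import math

def replications_needed(noise_var, target_mean_var):
    """Smallest n with noise_var / n <= target_mean_var, i.e.
    n = ceil(noise_var / target_mean_var), at least 1.
    The sample mean of n i.i.d. observations with variance v
    has variance v / n, so noisier inputs need more replications."""
    return max(1, math.ceil(noise_var / target_mean_var))

def allocate(condition_vars, target_mean_var):
    """Map each condition's noise variance to a replication count."""
    return [replications_needed(v, target_mean_var) for v in condition_vars]

# Three conditions with heteroscedastic noise; target mean-variance 0.5
plan = allocate([0.4, 1.0, 3.2], 0.5)  # [1, 2, 7]
```

Under a fixed total budget, such a rule spends replications where the observation noise is largest, which is exactly the trade-off between unique conditions and replications that the abstract describes.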
Temporal Prediction and Evaluation of Brassica Growth in the Field using Conditional Generative Adversarial Networks
Drees, Lukas, Junker-Frohn, Laura Verena, Kierdorf, Jana, Roscher, Ribana
Farmers frequently assess plant growth and performance as basis for making decisions when to take action in the field, such as fertilization, weed control, or harvesting. The prediction of plant growth is a major challenge, as it is affected by numerous and highly variable environmental factors. This paper proposes a novel monitoring approach that comprises high-throughput imaging sensor measurements and their automatic analysis to predict future plant growth. Our approach's core is a novel machine learning-based generative growth model based on conditional generative adversarial networks, which is able to predict the future appearance of individual plants. In experiments with RGB time-series images of laboratory-grown Arabidopsis thaliana and field-grown cauliflower plants, we show that our approach produces realistic, reliable, and reasonable images of future growth stages. The automatic interpretation of the generated images through neural network-based instance segmentation allows the derivation of various phenotypic traits that describe plant growth.
- Europe > United Kingdom > Wales > Ceredigion > Aberystwyth (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > Montana (0.04)
- (2 more...)